
    Do we perceive a flattened world on the monitor screen?

    The current model of three-dimensional perception hypothesizes that the brain integrates depth cues in a statistically optimal fashion through a weighted linear combination, with weights proportional to the reliabilities obtained for each cue in isolation (Landy, Maloney, Johnston, & Young, 1995). Even though many investigations support this theoretical framework, some recent empirical findings are at odds with it (e.g., Domini, Caudek, & Tassinari, 2006). Failures of linear cue integration have been attributed to cue conflict and to unmodelled cues to flatness present in computer-generated displays. We describe two cue-combination experiments designed to test the integration of stereo and motion cues in the presence of consistent or conflicting blur and accommodation information (i.e., when flatness cues are either absent, with physical stimuli, or present, with computer-generated displays). In both conditions, we replicated the results of Domini et al. (2006): the amount of perceived depth increased as more cues were available, also producing an over-estimation of depth in some conditions. These results can be explained by the Intrinsic Constraint model, but not by linear cue combination.
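    The reliability-weighted combination rule that this abstract tests can be sketched in a few lines. In the standard formulation, each cue's reliability is the inverse of its variance, and the fused estimate is a weighted average with weights proportional to reliability. The numbers below are purely illustrative and are not taken from the study:

    ```python
    import numpy as np

    def combine_cues(estimates, variances):
        """Reliability-weighted linear cue combination (Landy et al., 1995).

        Reliability of each cue is the inverse of its variance; the combined
        estimate is a weighted average with weights proportional to reliability.
        The variance of the fused estimate is lower than that of any single cue.
        """
        estimates = np.asarray(estimates, dtype=float)
        reliabilities = 1.0 / np.asarray(variances, dtype=float)
        weights = reliabilities / reliabilities.sum()
        combined = np.sum(weights * estimates)
        combined_variance = 1.0 / reliabilities.sum()
        return combined, combined_variance

    # Hypothetical example: a stereo cue signals 10 cm of depth with variance 1.0,
    # while a motion cue signals 14 cm with variance 4.0.
    depth, var = combine_cues([10.0, 14.0], [1.0, 4.0])
    # weights are 0.8 and 0.2, so depth = 0.8*10 + 0.2*14 = 10.8, variance = 0.8
    ```

    Note that this model can never predict more perceived depth than the deepest single-cue estimate, which is why the over-estimation reported in the abstract is incompatible with it.
    
    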

    Facial Emotions Improve Face Discrimination Learning

    How visual experience modulates the ability to discriminate faces from one another is still poorly understood. The aim of this study was to investigate whether emotions favor face discrimination learning. To this end, we measured face discrimination thresholds before and after a training phase in which participants were exposed to (task-irrelevant) subtle variations in face images from trial to trial. A task-irrelevant perceptual learning paradigm was used because it closely mimics the learning processes that occur daily, without a conscious intention to learn and without focused attention on specific facial features. During the four sessions of training, participants performed a contrast-discrimination task on face images. The task-irrelevant features were variations of the face images along the morphing continuum of facial identity (Identity group) or along the morphing continuum of emotional expressions (Emotion group). A third group of participants (Control group) did not perform the contrast training, but their face discrimination thresholds were measured with the same temporal gap as for the other two groups. Results indicate a face discrimination improvement only for the Emotion group. Participants in the Emotion group showed a discrimination improvement when tested with variations along the dimension of identity as well as with variations along the dimension of expression, even though identity variations were not used during training. The present results suggest a role of emotions in face discrimination learning and show that faces, differently from other classes of stimuli, may manifest a higher degree of learning transfer.

    Recovering slant and angular velocity from a linear velocity field: modeling and psychophysics

    The data from two experiments, both using stimuli simulating orthographically rotating surfaces, are presented, with the primary variable of interest being whether the simulated velocity gradient arose from expanding or contracting motion. One experiment asked observers to report the apparent slant of the rotating surface using a gauge figure. The other experiment asked observers to report the angular velocity using a comparison rotating sphere. The results from both experiments clearly show that observers are less sensitive to expanding than to contracting optic-flow fields. These results are well predicted by a probabilistic model that derives the orientation and angular velocity of the projected surface from the properties of the optic flow computed within an extended time window.

    Misperception of rigidity from actively generated optic flow

    It is conventionally assumed that the goal of the visual system is to derive a perceptual representation that is a veridical reconstruction of the external world: a reconstruction that leads to optimal accuracy and precision of metric estimates, given sensory information. For example, 3-D structure is thought to be veridically recovered from optic flow signals in combination with egocentric motion information and assumptions of the stationarity and rigidity of the external world. This theory predicts veridical perceptual judgments under conditions that mimic natural viewing, while ascribing nonoptimality under laboratory conditions to unreliable or insufficient sensory information, for example, the lack of natural and measurable observer motion. In two experiments, we contrasted this optimal theory with a heuristic theory that predicts the derivation of perceived 3-D structure based on the velocity gradients of the retinal flow field, without the use of egomotion signals or a rigidity prior. Observers viewed optic flow patterns generated by their own motions relative to two surfaces and later viewed the same patterns while stationary. When the surfaces were part of a rigid structure, static observers systematically perceived a nonrigid structure, consistent with the predictions of both the optimal and the heuristic model. Contrary to the optimal model, moving observers also perceived nonrigid structures in situations where retinal and extraretinal signals, combined with a rigidity assumption, should have yielded a veridical rigid estimate. The perceptual biases were, however, consistent with a heuristic model based only on an analysis of the optic flow.